Belief Propagation Neural Networks

Neural Information Processing Systems

Learned neural solvers have successfully been used to solve combinatorial optimization and decision problems. More general counting variants of these problems, however, are still largely solved with hand-crafted solvers. To bridge this gap, we introduce belief propagation neural networks (BPNNs), a class of parameterized operators that operate on factor graphs and generalize Belief Propagation (BP). In its strictest form, a BPNN layer (BPNN-D) is a learned iterative operator that provably maintains many of the desirable properties of BP for any choice of the parameters. Empirically, we show that, once trained, BPNN-D performs the task better than the original BP: it converges 1.7x faster on Ising models while providing tighter bounds. On challenging model counting problems, BPNNs compute estimates hundreds of times faster than state-of-the-art hand-crafted methods, while returning an estimate of comparable quality.
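For readers unfamiliar with the baseline the abstract refers to, the following is a minimal sketch (not the authors' implementation) of sum-product BP on a two-variable chain, where BP is exact and the message-passing fixed point recovers the true partition function Z. A BPNN-D layer, as described above, replaces fixed update rules like these with a learned iterative operator:

```python
import numpy as np

# Potentials for binary variables x0, x1 in {0, 1} (illustrative values)
psi0 = np.array([1.0, 2.0])           # unary factor on x0
psi1 = np.array([3.0, 1.0])           # unary factor on x1
psi01 = np.array([[2.0, 1.0],         # pairwise factor on (x0, x1)
                  [1.0, 4.0]])

# Exact partition function by brute-force enumeration
Z = sum(psi0[a] * psi1[b] * psi01[a, b]
        for a in range(2) for b in range(2))

# Sum-product messages along the chain (exact on trees after one sweep)
m01 = psi01.T @ psi0                  # message arriving at x1 from x0's side
m10 = psi01 @ psi1                    # message arriving at x0 from x1's side

# On a tree, the unnormalized beliefs at every variable sum to Z
Z_bp0 = float(psi0 @ m10)
Z_bp1 = float(psi1 @ m01)
print(Z, Z_bp0, Z_bp1)                # all three agree
```

On loopy graphs such as the Ising models mentioned above, these updates are iterated (often with damping) and Z is only approximated via the Bethe free energy, which is where learned operators can improve convergence and tightness.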


Review for NeurIPS paper: Belief Propagation Neural Networks

Neural Information Processing Systems

Weaknesses: The absence of any explaining figures makes it hard for the reader to fully understand the underlying architecture of BPNNs. Specifically, it is not clear what the actual target of a BPNN acting on a particular factor graph is! Is it the partition function? Moreover, the intention of the Bethe Free Energy Layer remains obscure. In lines 121-125 you mention: 'When convergence to a fixed point is unnecessary, we can increase the flexibility [...] Additionally we define a Bethe free energy layer using two MLPs that take the trajectories of learned beliefs from each factor and variable as input and output scalars.'
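For context on the reviewer's question, the standard Bethe free energy that such a layer would approximate (a textbook quantity, not taken from the paper under review) is, for factor beliefs $b_a$ and variable beliefs $b_i$ with variable degrees $d_i$:

```latex
F_{\text{Bethe}} = \sum_{a} \sum_{x_a} b_a(x_a) \ln \frac{b_a(x_a)}{\psi_a(x_a)}
  \;-\; \sum_{i} (d_i - 1) \sum_{x_i} b_i(x_i) \ln b_i(x_i),
\qquad \ln Z \approx -F_{\text{Bethe}}.
```

At a BP fixed point, $-F_{\text{Bethe}}$ gives the Bethe approximation to the log partition function, which is presumably the scalar target the reviewer is asking about.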


Review for NeurIPS paper: Belief Propagation Neural Networks

Neural Information Processing Systems

The reviewers felt the paper provides a valuable connection between modern graph neural networks and belief propagation. There was some confusion about how to train BPNNs, and the paper should be revised to clarify these points. The authors should also expand the discussion of marginal estimation, even if it is only to highlight limitations of the proposed approach.

